Deep neural networks currently provide the state-of-the-art and most accurate machine learning models for distinguishing structural MRI scans of subjects with Alzheimer's disease from those of healthy controls. Unfortunately, the subtle brain alterations captured by these models are difficult to interpret because of the complexity of these multi-layer, non-linear models. Several heatmap methods have been proposed to address this issue and to analyze the imaging patterns extracted from deep neural networks, but to date these methods have not been compared quantitatively. In this work, we explore these questions by deriving heatmaps from a convolutional neural network (CNN) trained on T1 MRI scans from the ADNI dataset, and by comparing these heatmaps with brain maps corresponding to support vector machine (SVM) coefficients. Three prominent heatmap methods are studied: Layer-wise Relevance Propagation (LRP), Integrated Gradients (IG), and Guided Grad-CAM (GGC). In contrast to previous studies that assessed the quality of heatmaps visually or qualitatively, we obtain precise quantitative measures by computing their overlap with a ground-truth map from a large meta-analysis that combined 77 voxel-based morphometry (VBM) studies independent of ADNI. Our results show that all three heatmap methods are able to capture the brain regions covered by the meta-analysis map and achieve better results than the SVM coefficients. Among them, IG produced the heatmaps with the best overlap with the independent meta-analysis.
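As a concrete illustration of the overlap evaluation described above, the following is a minimal sketch, not the authors' code; the top-percentile binarization and the Dice score are assumptions about how such an overlap measure could be computed.

    import numpy as np

    def heatmap_overlap(heatmap, ground_truth_mask, percentile=95):
        """Dice overlap between the most salient heatmap voxels and a
        binary ground-truth map (e.g., from a VBM meta-analysis)."""
        threshold = np.percentile(heatmap, percentile)
        salient = heatmap >= threshold          # binarize the heatmap
        gt = ground_truth_mask.astype(bool)
        intersection = np.logical_and(salient, gt).sum()
        return 2.0 * intersection / (salient.sum() + gt.sum())

    # Hypothetical usage: compare LRP, IG, and GGC heatmaps (3D arrays
    # registered to the same template space as the meta-analysis map).
    # scores = {name: heatmap_overlap(h, vbm_mask) for name, h in heatmaps.items()}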
In the past few years, many "explainable artificial intelligence" (XAI) methods have been developed, but these methods are not always evaluated objectively. To assess the quality of heatmaps produced by various saliency methods, we developed a framework to generate artificial data with synthetic lesions and known ground-truth maps. Using this framework, we evaluated two datasets, Perlin noise and 2D brain MRI slices, and found that heatmaps vary greatly across saliency methods and backgrounds. We strongly recommend using this framework to further evaluate saliency maps and XAI methods before applying them in clinical or other safety-critical settings.
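To make the framework's idea concrete, here is a minimal sketch under stated assumptions: the disc-shaped lesion and the IoU scoring below are illustrative choices, not the authors' released implementation.

    import numpy as np

    def add_synthetic_lesion(background, center, radius, intensity=1.0):
        """Return (image, ground_truth_mask) with a disc-shaped lesion
        inserted into a 2D background (e.g., Perlin noise)."""
        yy, xx = np.mgrid[:background.shape[0], :background.shape[1]]
        mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
        image = background.astype(float)
        image[mask] += intensity
        return image, mask

    def saliency_iou(saliency, mask, percentile=95):
        """Score a saliency map against the known lesion location."""
        hot = saliency >= np.percentile(saliency, percentile)
        return np.logical_and(hot, mask).sum() / max(np.logical_or(hot, mask).sum(), 1)

Because the lesion location is known exactly, the same image can be scored under every saliency method, which is what makes heatmap quality directly comparable.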
Named Entity Recognition (NER) is an important and well-studied task in natural language processing. The classic CoNLL-2003 English dataset, published almost 20 years ago, is commonly used to train and evaluate named entity taggers. The age of this dataset raises the question of how well these models perform when applied to modern data. In this paper, we present CoNLL++, a new annotated test set that mimics the process used to create the original CoNLL-2003 test set as closely as possible, except with data collected from 2020. Using CoNLL++, we evaluate the generalization of 20+ different models to modern data. We observe that different models have very different generalization behavior. F1 scores of large transformer-based models pre-trained on recent data dropped much less than those of models using static word embeddings, and RoBERTa-based and T5 models achieve comparable F1 scores on both CoNLL-2003 and CoNLL++. Our experiments show that achieving good generalizability requires a combined effort of developing larger models and continuing pre-training with in-domain and recent data. These results suggest that standard evaluation methodology may have underestimated progress on named entity recognition over the past 20 years; in addition to improving performance on the original CoNLL-2003 dataset, we have also improved the ability of our models to generalize to modern data.
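To make the comparison concrete, a minimal evaluation sketch follows; the tagger interface and the use of the seqeval library are assumptions for illustration, not the paper's released code.

    from seqeval.metrics import f1_score

    def f1_on(tagger, dataset):
        """dataset: list of (tokens, gold_tags) pairs with BIO tags."""
        gold = [tags for _, tags in dataset]
        pred = [tagger.predict(tokens) for tokens, _ in dataset]
        return f1_score(gold, pred)

    # drop = f1_on(tagger, conll2003_test) - f1_on(tagger, conllpp_test)
    # A small drop indicates good generalization to modern (2020) text.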
We present a human-in-the-loop evaluation framework for fact-checking novel misinformation claims and identifying social media messages that violate relevant policies. Our approach extracts structured representations of check-worthy claims, which are aggregated and ranked for review. Stance classifiers are then used to identify tweets supporting novel misinformation claims, which are further reviewed to determine whether they violate relevant policies. To demonstrate the feasibility of our approach, we develop a baseline system based on modern NLP methods for human-in-the-loop fact-checking in the domain of COVID-19 treatments. Using our baseline system, we show that human fact-checkers can identify 124 tweets per hour that violate Twitter's policies on COVID-19 misinformation. We will make our code, data, and detailed annotation guidelines available to support the evaluation of human-in-the-loop systems that identify novel misinformation directly from raw user-generated content.
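The pipeline stages named above can be summarized in a schematic sketch; every component interface here is a hypothetical placeholder, not the baseline system itself.

    from collections import Counter

    def rank_checkworthy_claims(tweets, extract_claim):
        """Aggregate structured (hashable) claims across tweets and rank
        them by frequency for reviewer triage."""
        claims = [extract_claim(t) for t in tweets]
        return Counter(c for c in claims if c is not None).most_common()

    def flag_for_review(tweets, claim, stance_classifier):
        """Queue tweets whose stance supports a given misinformation claim."""
        return [t for t in tweets if stance_classifier(t, claim) == "supports"]

    # Human fact-checkers then review the queued tweets and decide which
    # ones actually violate the platform's misinformation policies.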
Our goal with this survey is to provide an overview of the state-of-the-art deep learning technologies for face generation and editing. We will cover the latest popular architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
Modeling perception sensors is key for simulation-based testing of automated driving functions. Beyond weather conditions themselves, sensors are also subject to object-dependent environmental influences such as tire spray caused by vehicles moving on wet pavement. In this work, a novel modeling approach for spray in lidar data is introduced. The model conforms to the Open Simulation Interface (OSI) standard and is based on the formation of detection clusters within a spray plume. The detections are rendered with a simple custom ray-casting algorithm, without the need for a fluid dynamics simulation or physics engine. The model is subsequently used to generate training data for object detection algorithms, and it is shown that the model significantly improves detection in real-world spray scenarios. Furthermore, a systematic real-world dataset is recorded and published for analysis, model calibration, and validation of spray effects in active perception sensors. Experiments are conducted on a test track by driving over artificially watered pavement with varying vehicle speeds, vehicle types, and levels of pavement wetness. All models and data from this work are available as open source.
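The rendering idea can be pictured with a toy sketch: casting lidar rays against spherical detection clusters inside a plume, with no physics engine. The sphere geometry and intersection test below are illustrative assumptions, not the OSI-conformant model itself.

    import numpy as np

    def cast_ray(origin, direction, clusters):
        """Return the range to the nearest spray-cluster hit, or None.
        clusters: iterable of (center, radius) spheres inside the plume."""
        direction = direction / np.linalg.norm(direction)
        nearest = None
        for center, radius in clusters:
            oc = origin - center
            b = np.dot(oc, direction)
            disc = b * b - (np.dot(oc, oc) - radius * radius)
            if disc < 0:
                continue                    # ray misses this cluster
            t = -b - np.sqrt(disc)          # distance to first intersection
            if t > 0 and (nearest is None or t < nearest):
                nearest = t
        return nearest

Casting one such ray per lidar beam yields cluster detections directly, which is what keeps the model cheap compared to a fluid-dynamics simulation.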
In this work, we propose a novel image reconstruction framework that directly learns a neural implicit representation in k-space for ECG-triggered non-Cartesian Cardiac Magnetic Resonance Imaging (CMR). While existing methods bin acquired data from neighboring time points to reconstruct one phase of the cardiac motion, our framework allows for a continuous, binning-free, and subject-specific k-space representation. We assign a unique coordinate, consisting of time, coil index, and frequency-domain location, to each sampled k-space point. We then learn the subject-specific mapping from these unique coordinates to k-space intensities using a multi-layer perceptron with frequency-domain regularization. During inference, we obtain a complete k-space on Cartesian coordinates at an arbitrary temporal resolution. A simple inverse Fourier transform recovers the image, eliminating the need for density compensation and costly non-uniform Fourier transforms for non-Cartesian data. This novel imaging framework was tested on 42 radially sampled datasets from 6 subjects. The proposed method outperforms other techniques qualitatively and quantitatively using data from four heartbeats and from a single heartbeat, across 30 cardiac phases. Our results for one-heartbeat reconstruction of 50 cardiac phases show improved artifact removal and spatio-temporal resolution, demonstrating the potential for real-time CMR.
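A minimal sketch of the implicit k-space idea may help: an MLP maps a (time, coil, kx, ky) coordinate to a complex k-space value. The layer sizes and the real/imaginary output split are assumptions, and the frequency-domain regularizer is omitted for brevity.

    import torch

    class KSpaceMLP(torch.nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(4, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, 2),     # real and imaginary parts
            )

        def forward(self, coords):              # coords: (N, 4) = (t, coil, kx, ky)
            out = self.net(coords)
            return torch.complex(out[..., 0], out[..., 1])

    # Inference: query the trained MLP on a full Cartesian grid for any
    # time point, then a plain inverse FFT recovers the image, with no
    # density compensation or non-uniform FFT required.
    # kspace = model(cartesian_coords).reshape(H, W)
    # image = torch.fft.ifft2(torch.fft.ifftshift(kspace))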
This article presents a survey of literature in the area of Human-Robot Interaction (HRI), specifically on systems containing more than two agents (i.e., having multiple humans and/or multiple robots). We identify three core aspects of "multi-agent" HRI systems that are useful for understanding how these systems differ from dyadic systems and from one another. These are the Team structure, Interaction style among agents, and the system's Computational characteristics. Under these core aspects, we present five attributes of HRI systems, namely Team size, Team composition, Interaction model, Communication modalities, and Robot control. These attributes are used to characterize and distinguish one system from another. We populate the resulting categories with examples from recent literature, along with a brief discussion of their applications, and analyze how these attributes differ from the case of dyadic human-robot systems. We summarize key observations from the current literature, and identify challenges and promising areas for future research in this domain. In order to realize the vision of robots being part of society and interacting seamlessly with humans, there is a need to expand research on multi-human, multi-robot systems. Not only do these systems require coordination among several agents, they also involve multi-agent and indirect interactions that are absent from dyadic HRI systems. Adding multiple agents to HRI systems requires advanced interaction schemes, behavior understanding, and control methods to allow natural interactions among humans and robots. In addition, research on human behavioral understanding in mixed human-robot teams requires more attention. This will help formulate and implement effective robot control policies in HRI systems with large numbers of heterogeneous robots and humans, a team composition reflecting many real-world scenarios.
Translating training data into many languages has emerged as a practical solution for improving cross-lingual transfer. For tasks that involve span-level annotations, such as information extraction or question answering, an additional label projection step is required to map annotated spans onto the translated texts. Recently, a few efforts have utilized a simple mark-then-translate method to jointly perform translation and projection by inserting special markers around the labeled spans in the original sentence. However, as far as we are aware, no empirical analysis has been conducted on how this approach compares to traditional annotation projection based on word alignment. In this paper, we present an extensive empirical study across 42 languages and three tasks (QA, NER, and Event Extraction) to evaluate the effectiveness and limitations of both methods, filling an important gap in the literature. Experimental results show that our optimized version of mark-then-translate, which we call EasyProject, is easily applied to many languages and works surprisingly well, outperforming the more complex word alignment-based methods. We analyze several key factors that affect end-task performance, and show EasyProject works well because it can accurately preserve label span boundaries after translation. We will publicly release all our code and data.
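A minimal sketch of the mark-then-translate idea follows; the bracket markers and regex recovery are illustrative assumptions, not the EasyProject implementation.

    import re

    def mark_spans(tokens, spans):
        """Wrap labeled spans in markers.
        spans: list of (start, end, label) token offsets, end exclusive."""
        out = []
        for i, tok in enumerate(tokens):
            out.extend(f"[{label}]" for start, _, label in spans if i == start)
            out.append(tok)
            out.extend(f"[/{label}]" for _, end, label in spans if i == end - 1)
        return " ".join(out)

    def recover_spans(translated):
        """Extract (label, text) pairs back out of a translated sentence."""
        return [(label, text.strip())
                for label, text in re.findall(r"\[(\w+)\](.*?)\[/\1\]", translated)]

    # mark_spans(["Obama", "visited", "Paris"], [(0, 1, "PER"), (2, 3, "LOC")])
    # -> "[PER] Obama [/PER] visited [LOC] Paris [/LOC]"
    # After machine translation, recover_spans() reads the projected spans
    # from the translated sentence, provided the MT system keeps the markers.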
This technical report presents GPS++, the first-place solution to the Open Graph Benchmark Large-Scale Challenge (OGB-LSC 2022) for the PCQM4Mv2 molecular property prediction task. Our approach implements several key principles from the prior literature. At its core, our GPS++ method is a hybrid MPNN/Transformer model that incorporates 3D atom positions and an auxiliary denoising task. The effectiveness of GPS++ is demonstrated by achieving 0.0719 mean absolute error on the independent test-challenge PCQM4Mv2 split. Thanks to Graphcore IPU acceleration, GPS++ scales to deep architectures (16 layers), training at 3 minutes per epoch, and to a large ensemble (112 models), completing the final predictions in 1 hour 32 minutes, well under the allocated 4-hour inference budget. Our implementation is publicly available at: https://github.com/graphcore/ogb-lsc-pcqm4mv2.
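As background, the "GPS"-style hybrid block the report builds on can be sketched roughly as a local message-passing update combined with global self-attention over all atoms; the dimensions and sum-aggregation MPNN below are simplifications, not the actual GPS++ block, and the 3D positions and denoising task are omitted.

    import torch

    class HybridGPSLayer(torch.nn.Module):
        def __init__(self, dim, heads=8):
            super().__init__()
            self.msg = torch.nn.Linear(2 * dim, dim)   # local MPNN message
            self.attn = torch.nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = torch.nn.LayerNorm(dim)

        def forward(self, x, edge_index):
            # x: (N, dim) node features; edge_index: (2, E) long tensor
            src, dst = edge_index
            messages = self.msg(torch.cat([x[src], x[dst]], dim=-1))
            local = torch.zeros_like(x).index_add_(0, dst, messages)
            global_, _ = self.attn(x[None], x[None], x[None])   # all-pairs
            return self.norm(x + local + global_[0])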